
IPEN event on “Human oversight of automated decision-making”

IPEN events bring together privacy experts and engineers from public authorities, industry, academia and civil society to discuss challenges and developments in the engineering and technological implementation of data protection and privacy requirements across all phases of the development process.

The EDPS and Karlstad University are hosting an Internet Privacy Engineering Network (IPEN) event on "Human oversight of automated decision-making" on 3 September 2024.

When: 3 September 2024 - 14:00-18:00 CEST
Where:

Physical attendance: Eva Eriksson lecture hall, Universitetsgatan 2, 651 88 Karlstad, Sweden (registration required – register via https://apf.axacoair.se/)

Online participation: (connection link will be available before the event)

See our Data protection notice for further information.

Agenda

14:00 - 14:20  Welcome introduction
Wojciech Wiewiórowski (EDPS)
Karlstad University

14:20 - 14:35  Keynote speech: "Expectations vs. reality: What we expect human oversight to be and what it really is"
Ben Green, University of Michigan School of Information (remote)

14:35 - 15:30  Panel 1: "Human oversight in the GDPR and the AI Act"
Ben Wagner, TU Delft (remote)
Mariarosaria Taddeo, Oxford Internet Institute
Claes Granmar, Stockholm University
Moderator: Daniele Nardi (EDPS)

15:30 - 15:50  Coffee break

15:55 - 16:35  Panel 2: "Main challenges to effective human oversight"
Sarah Sterz, Saarland University
Leila Methnani, Umeå University
Moderator: Isabel Barberá (Rhite)

16:35 - 17:20  Panel 3: "Power to the people! Creating conditions for more effective human oversight"
Simone Fischer-Hübner, Karlstad University
Jahna Otterbacher, Open University of Cyprus (OUC)
Moderator: TBD

17:20 - 17:30  Concluding remarks
Massimo Attoresi (EDPS)

Human oversight of automated decision-making

EU regulations such as the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AIA) state that decisions that could have a significant impact on individuals should not be fully automated. Instead, human oversight should be in place to ensure that the decisions supported by automation systems (such as artificial intelligence) are fair and accountable.

An example can be found in Article 22 of the GDPR, which provides that "the data subject shall have the right not to be subject to a decision based solely on automated processing which produces legal effects or similarly significantly affects him or her".

Another example is Article 14(2) of the AIA, which requires human oversight of high-risk AI systems to "prevent or minimise the risks to health, safety or fundamental rights". This is also supported by recital 73 of the AIA, which provides a rationale by stating that "appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service".
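
In privacy engineering terms, these provisions imply a routing step in the decision pipeline: outcomes with legal or similarly significant effects should pass through a human reviewer rather than being applied automatically. The following is a minimal sketch of such a gate, offered purely as an illustration; the field names, the threshold and the review callback are hypothetical and are not drawn from the GDPR, the AIA or any EDPS guidance.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Decision:
        subject_id: str
        score: float                    # model output, e.g. an estimated risk
        automated_outcome: str          # what the system would decide alone
        reviewed_by_human: bool = False
        final_outcome: Optional[str] = None

    def decide(subject_id: str, score: float, has_significant_effect: bool,
               human_review: Callable[[Decision], str]) -> Decision:
        """Apply the automated outcome only when the decision has no legal or
        similarly significant effect; otherwise route it to a human reviewer."""
        automated = "reject" if score > 0.5 else "approve"  # hypothetical rule
        decision = Decision(subject_id, score, automated)
        if has_significant_effect:
            # Article 22-style safeguard: a human confirms or overrides.
            decision.final_outcome = human_review(decision)
            decision.reviewed_by_human = True
        else:
            decision.final_outcome = automated
        return decision

Whether the reviewer in such a gate exercises meaningful oversight, rather than acting as a rubber stamp, is precisely the question this event addresses.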

The 2019 Ethics Guidelines for Trustworthy AI, drawn up by the European Commission's High-Level Expert Group on AI, include seven non-binding ethical principles intended to help ensure that AI is trustworthy and ethically sound. One of these seven principles is "Human agency and oversight".

However, some authors point out that there may be a lack of clarity about what "human oversight" means, what can be expected from it, and how it can be implemented effectively:

“Regulators presumably put humans in the loop because they think they will do something there. What, precisely, are their assumptions about human decision-making and the ways in which it differs from machines?” 

“Adding a ‘human in the loop’ does not cleanse away problematic decisions and can make them worse”.
Matsumi, H., & Solove, D. J. (2023). The Prediction Society: Algorithms and the Problems of Forecasting the Future.

“Human oversight policies shift responsibility for algorithmic systems from agency leaders and technology vendors to human operators.”
Green, B. (2022). The flaws of policies requiring human oversight of government algorithms.

“When a human is placed in the loop carelessly, there is a high likelihood that the human will be disempowered, ineffective, or even create or compound system errors.”
Crootof, R., Kaminski, M. E., & Price, W. N., II (2023). Humans in the Loop.

Real-life events such as the Three Mile Island accident in 1979 and the Air France Flight 447 crash in 2009 show that, when human operators are presented with inaccurate information, they are not only unable to monitor systems effectively but can actually exacerbate the potential consequences.

In other situations, such as when Tesla's self-driving cars reportedly handed over control to humans seconds before impact, humans are put in a position where they are neither prepared nor able to intervene in time to correct the system's behaviour.

Although organisations, and particularly decision makers, who choose to use automated decision-making systems must be held accountable for the automated decisions those systems make, it is often the human operators interacting with the systems at the end of the pipeline who are blamed for poor outcomes.

The aim of the IPEN event is to promote discussion on questions such as the following:

  • Don't the requirements for human oversight shift the burden of responsibility from the systems and their providers to the people who operate them?
  • Could there be unavoidable liability for the operator? Suppose a human operator chooses to follow the system's suggestion, and it turns out to be wrong. Wouldn't that be seen as an inability of the operator to understand the limitations of the system?
    And, conversely, if the operator decides against the system's suggestion and also proves wrong, wouldn't that result in an even worse outcome for the operator, who had clear indicators to decide otherwise?
  • Article 14(2) of the AIA (Human oversight) provides that "human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used". Are the provisions of Article 14 clear enough about what oversight measures are expected from humans/providers and what their responsibilities should be?
  • If human oversight is a risk mitigation control, how can we measure its impact? (A toy sketch of one possible metric follows this list.)
  • What does "appropriate" human oversight mean? What characteristics should be taken into account to assess whether a human oversight procedure is appropriate?
  • Could regulations requiring human oversight be paving the way for the production of defective systems?
  • How should this oversight happen? In the testing and monitoring of the system? Are we talking about escalation procedures, as in a call centre?
  • What skills should the humans involved have? Are we talking about engineers who know how an AI system works, or about humanists?
  • What would be the legal implications if, in the end, the AI system causes harm? Who would be legally and morally accountable: the user of the system, the provider of the system, or the overseer of the system?
  • Incorporating humans into the process is costly, may not be scalable and could reduce the speed of systems, so AI deployers might not be inclined to use human oversight. Where should the line be drawn?
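
On the measurement question raised in the list above: one naive way to quantify the impact of oversight is to compare the accuracy of the purely automated outcomes with the accuracy of the final, possibly human-overridden, outcomes. The sketch below is offered purely as a discussion aid, assuming hypothetical decision logs with ground-truth labels; it is not a method proposed by the organisers or the speakers.

    # Illustrative only: the log fields ("automated", "final", "truth")
    # are hypothetical assumptions, not a real logging schema.
    def oversight_impact(records: list[dict]) -> dict:
        total = len(records)
        overridden = sum(r["final"] != r["automated"] for r in records)
        auto_correct = sum(r["automated"] == r["truth"] for r in records)
        final_correct = sum(r["final"] == r["truth"] for r in records)
        return {
            "override_rate": overridden / total,
            "accuracy_without_oversight": auto_correct / total,
            "accuracy_with_oversight": final_correct / total,
        }

    logs = [
        {"automated": "reject",  "final": "approve", "truth": "approve"},
        {"automated": "approve", "final": "approve", "truth": "approve"},
        {"automated": "reject",  "final": "reject",  "truth": "approve"},
    ]
    print(oversight_impact(logs))  # override rate 0.33; accuracy 0.33 -> 0.67

Even such a toy metric exposes the dilemma raised above: ground truth is rarely available at decision time, and a high override rate may signal either effective oversight or a defective system.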

Speakers

Ben Green, University of Michigan School of Information

Ben Green is an assistant professor in the University of Michigan School of Information and an assistant professor (by courtesy) in the Gerald R. Ford School of Public Policy. He holds a PhD in Applied Mathematics from Harvard University, with a secondary field in Science, Technology, and Society.

Ben studies the ethics of government algorithms, with a focus on algorithmic fairness, human-algorithm interactions, and AI regulation. Through his research, Ben aims to support design and governance practices that prevent algorithmic harms and advance social justice.

His first book, The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future, was published in 2019 by MIT Press. He is working on a second book, Algorithmic Realism: Data Science Practices to Promote Social Justice.

Ben is also a faculty associate at the Berkman Klein Center for Internet & Society at Harvard and a fellow at the Center for Democracy & Technology.

Ben Wagner, TU Delft

Ben Wagner is Associate Professor of Human Rights and Technology at the Faculty of Technology, Policy and Management and Director of the AI Futures Lab at TU Delft. He is also Professor of Media, Technology and Society at Inholland. His research focuses on the governance of socio-legal systems, in particular human rights in digital technologies, and on designing more accountable decision-support systems.

He is a visiting researcher at the Human Centered Computing Group at Oxford University, an advisory board member of the data science journal Patterns, and a member of the International Scientific Committee of the UKRI Trustworthy Autonomous Systems Hub. Ben spends three days a week working at TU Delft and two days a week working at Inholland.

Previously, Ben served as founding Director of the Center for Internet & Human Rights at European University Viadrina, Director of the Sustainable Computing Lab at Vienna University of Economics, and a member of the Advisory Group of the European Union Agency for Network and Information Security (ENISA). He holds a PhD in Political and Social Sciences from the European University Institute in Florence.

Mariarosaria Taddeo, University of Oxford

Professor Taddeo is Professor of Digital Ethics and Defence Technologies at the Oxford Internet Institute, University of Oxford. She is also the Programme Director of the DPhil in Information, Communication and the Social Sciences at the Oxford Internet Institute and Dstl Ethics Fellow at the Alan Turing Institute.

Her recent work focuses on the ethics and governance of digital technologies, and ranges from designing governance measures to leverage artificial intelligence (AI) to addressing the ethical challenges of using digital technologies in defence, the ethics of cybersecurity, and the governance of cyber conflicts. She has published more than 150 articles in this area, focusing on topics such as trustworthy digital technologies, governance of digital innovation, ethical governance of AI for national defence, and the ethics of cybersecurity. Her work has been published in major journals such as Nature, Nature Machine Intelligence, Science, and Science Robotics.

Professor Taddeo has led and co-led several successful projects in the area of Digital Ethics. Most notably, she is the PI of a current project on the ‘Ethical Principles for the Use of AI for National Security and Defence’ funded by Dstl (the UK Defence Science and Technology Laboratory). She was Co-I on an EPSRC project that funded the PETRAS IoT Research Hub. She was PI on a project funded by the NATO Cooperative Cyber Defence Centre of Excellence (CCD COE) to define ethical guidance for the regulation of cyber conflicts. Since 2017 she has been Co-PI on research projects developed at the Oxford Internet Institute such as ‘Digital Well-Being’, ‘The Ethics of Recommender Systems’ and ‘Posthumous Medical Data Donation’.

Professor Taddeo is an internationally renowned scholar. She is the lead expert on the CEPS Task Force on ‘Artificial Intelligence and Cybersecurity’; CEPS is a major European think-tank informing EU policies on cybersecurity. Between 2018 and 2020, she represented the UK on the NATO Human Factors and Medicine Exploratory Team (NATO HFM ET) ‘Operational Ethics: Preparation and Interventions for the Future Security Environment’.

Between 2016 and 2018, she was the Oxford Fellow at the Future Council for Cybersecurity of the World Economic Forum, helping to identify the ethical and policy cybersecurity problems that could impair the development of future societies. She has received multiple awards for her work on Digital Ethics, including the 2010 Simon Award for Outstanding Research in Computing and Philosophy and the 2016 World Technology Award for Ethics. In 2018, InspiringFifty named her among the 50 most inspiring Italian women working in technology. ORBIT listed her among the top 100 women working on the Ethics of AI globally in both 2018 and 2020. She was named one of the twelve 2020 “Outstanding Rising Talents” by the Women’s Forum for the Economy and Society. In 2020 and 2023, Computer Weekly listed her among the top 100 most influential women in technology in the UK. She also serves as editor-in-chief of Minds & Machines (SpringerNature).

Before joining the OII, Taddeo was a Research Fellow in Cyber Security and Ethics at the Department of Politics and International Studies, University of Warwick. From 2010 to 2012 she held a Marie Curie Fellowship at the University of Hertfordshire, working on information warfare and its ethical implications.

She holds a PhD (Doctor Europeus) in Philosophy from the University of Padua. Her PhD thesis focused on the epistemic and ethical implications of trust in artificial systems.

Claes Granmar, Stockholm University

Claes Granmar is a researcher and associate professor (docent) specialised in European law (EU law and the ECHR system). He is an alumnus of Oxford University, UK (visiting fellow at the Institute for European and Comparative Law 2017/2018 and member of the SCR at Lady Margaret Hall), Honorary Fellow at Melbourne Law School, Australia (2019), and a former fellow at the Hauser Global Law School Program, New York University (NYU), USA, and at the Max Planck Institute in Munich, Germany.

Claes wrote his PhD thesis on trade mark rights and brand competition, defended at the European University Institute (EUI) in Florence, Italy, and at Stockholm University. While writing his dissertation, he worked for two periods at the EFTA Court in Luxembourg.

After defending his dissertation at the EUI, Claes extended his field of research into EU internal market law and EU external trade relations law and policy in the context of constitutional EU law.

He is currently focusing on digital globalisation and the EU digital internal market.

Sarah Sterz, Saarland University

Sarah Sterz has been a member of the chair since July 2019 and is a PhD student with Prof. Holger Hermanns. She is currently the main lecturer for »Ethics for Nerds«, where she teaches relevant aspects of philosophy and ethics to computer scientists. Owing to her dual background in philosophy and computer science, she serves as a member of the Ethical Review Board of the Faculty of Mathematics and Computer Science and of the Commission on the Ethics of Security-Relevant Research at Saarland University.

She received her B.Sc. in computer science in 2017 and her M.A. in philosophy in 2019, both from Saarland University. As a student, she spent some time at the University of Luxembourg and the University of St Andrews, Scotland.

Leila Methnani, Umeå University

Leila Methnani is a doctoral student at the Department of Computing Science at Umeå University.

She has authored and co-authored the following works:

  • "Who's in charge here? A survey on trustworthy AI in variable autonomy robotic systems" (Methnani, Leila; Chiou, Manolis; Dignum, Virginia; et al. - 2024)
  • "Clash of the explainers: argumentation for context-appropriate explanations" (Methnani, Leila; Dignum, Virginia; Theodorou, Andreas - 2023)
  • "Operationalising AI ethics: conducting socio-technical assessment" (Methnani, Leila; Brännström, Mattias; Theodorou, Andreas - 2022)
  • "Embracing AWKWARD! A Hybrid Architecture for Adjustable Socially-Aware Agents" (Methnani, Leila - 2022)
  • "Embracing AWKWARD! Real-time Adjustment of Reactive Plans Using Social Norms" (Methnani, Leila; Antoniades, Andreas; Theodorou, Andreas - 2021)
  • "Let Me Take Over: Variable Autonomy for Meaningful Human Control" (Methnani, Leila; Aler Tubella, Andrea; Dignum, Virginia; et al. - 2021)

Simone Fischer-Hübner, Karlstad University

Simone Fischer-Hübner is an expert in IT security and a professor at the Faculty of Computer Science at Karlstad University, Sweden.

Simone Fischer-Hübner has been a member of the Cyber Security Advisory Board (MSB:s Cybersäkerhetsråd) of the Swedish Civil Contingencies Agency (MSB) since 2011. She is also the Swedish representative and vice-chair of IFIP Technical Committee 11 on Information Security and Privacy, a board member of the Swedish Forum för Dataskydd, an advisory board member of PETS (Privacy Enhancing Technologies Symposium) and NordSec, and coordinator of the Swedish IT Security Network for PhD Students (SWITS).

She received the IFIP Silver Core Award in 201, the IFIP William Winsborough Award in 2016 and an honorary doctorate from Chalmers University of Technology in 2021.

Jahna Otterbacher, Open University of Cyprus

Dr. Otterbacher holds a Master's in Statistics and a Ph.D. in Information from the University of Michigan at Ann Arbor (USA). She also holds a Master's in Applied Linguistics from Boston University. Prior to joining the OUC in 2012, she worked as a Visiting Lecturer of Management Information Systems at the University of Cyprus (2006-2009) and as an Assistant Professor at the Illinois Institute of Technology in Chicago (2010-2012).

At OUC, she leads the Cyprus Center for Algorithmic Transparency (CyCAT), which conducts interdisciplinary research focused on technical and educational solutions for promoting algorithmic transparency and literacy. Concurrently, Jahna co-leads the Fairness and Ethics in AI-Human Interaction (fAIre) group at CYENS, a new centre of excellence and innovation in Nicosia, Cyprus, in collaboration with two international Advanced Partners, UCL and MPI. Her research has been funded by the EU’s Horizon 2020 Research and Innovation Programme (under Grant Agreements No. 739578 (RISE) and No. 810105 (CyCAT)), as well as by the Cyprus Research and Innovation Foundation (under grants EXCELLENCE/0918/0086 (DESCANT) and EXCELLENCE/0421/0360 (KeepA(I)n)). In 2022, she was added to the Stanford-Elsevier list of the top-cited scientists in the area of Artificial Intelligence.

Her main research areas include algorithmic transparency and auditing, social and ethical considerations in artificial intelligence, human computation and crowdsourcing, and the analysis and modelling of users’ behaviour.

Wojciech Wiewiórowski, European Data Protection Supervisor

Wojciech Wiewiórowski has been the European Data Protection Supervisor (EDPS) since 6 December 2019.

He is also an adjunct professor in the Faculty of Law and Administration of the University of Gdańsk. His areas of scientific activity include, first of all, Polish and European IT law, the processing and security of information, legal information retrieval systems, the informatisation of public administration, and the application of new IT tools (semantic web, legal ontologies, cloud, blockchain) in legal information processing. He was, among others, an adviser in the field of e-government and information society for the Minister of Interior and Administration, and the Director of the Informatisation Department at the Ministry of Interior and Administration in Poland. He also represented Poland in the committee on Interoperability Solutions for European Public Administrations (the ISA Committee) assisting the European Commission.

Wojciech Wiewiórowski was also the Inspector General for the Protection of Personal Data (Polish Data Protection Commissioner) between 2010 and 2014 and the Vice-Chair of the Article 29 Working Party in 2014. In December 2014, he was appointed Assistant European Data Protection Supervisor. After the death of the Supervisor, Giovanni Buttarelli, in August 2019, he replaced Mr Buttarelli as acting EDPS.

Moderators

Isabel Barberá, Rhite

Isabel Barberá is the co-founder of Rhite, a legal & technical consultancy firm based in The Netherlands that specialises in Responsible AI and Privacy Engineering. She is a long-time advocate of privacy and security by design, and has always been passionate about the protection of human rights.

She is the author of the open-source AI assessment tool PLOT4.ai, a threat modelling library and methodology that helps organisations build Responsible AI systems. She is a member of the ENISA Ad Hoc Working Group on Data Protection Engineering.

Daniele Nardi, Legal Service at EDPS

Daniele Nardi is from Rome, Italy. After graduating in law at 'La Sapienza' University, he completed an MA in European Advanced Multidisciplinary Studies at the College of Europe (Natolin). After working briefly at the law firm Paul Hastings LLP, he joined the Legal Service of the European Commission in 2008, where he dealt first with agriculture and fisheries and then, for the last six years, with the protection of personal data. He has represented the Commission before the Court of Justice in more than 100 cases and was involved in preparing legislative proposals and negotiations. Since July 2021, he has been the Legal Service officer at the European Data Protection Supervisor, where he coordinates litigation and inter-institutional and other horizontal matters.

Massimo Attoresi, Technology & Privacy at EDPS


Massimo Attoresi is Deputy Head of the Technology & Privacy unit of the EDPS, which he joined in 2012. From October 2014 to September 2020 he was also the Data Protection Officer of the EDPS. He provides advice on the impact of technology on privacy and other fundamental rights arising from the processing of personal data. Among the topics he has focused on are cloud computing, online tracking and profiling, the privacy of electronic communications, privacy and data protection by design and by default, data protection engineering, and DPIAs. He graduated as an electronic engineer. After some years in the private ICT sector, he joined the European Anti-Fraud Office in 2002. From 2007 to 2012 he worked as Data Protection Coordinator and Local Informatics Security Officer in a Directorate-General of the European Commission.

Join us in this discussion on 3 September!